We present a framework for visual teach-and-repeat (VTR) navigation designed to operate robustly in environments with variable or low light levels. First, we show that VTR navigation accuracy can be improved by integrating a topological map with a decision-making strategy designed to reduce latency and trajectory error. Specifically, a deep-learned local scene descriptor is coupled with stereo camera imaging and a proportional-integral controller to compensate for inaccuracies in visual matching. This approach enables accurate teach-and-repeat navigation that corrects odometry drift in both orientation and along-route position while using only monocular images during route following. Next, we adapt this general approach to an off-the-shelf event-based camera and an event-based local descriptor model. Experiments in a night-time urban environment demonstrate that the event-based system achieves more accurate and robust navigation in low light than a conventional camera paired with a state-of-the-art RGB-based descriptor model. Overall, we demonstrate high trajectory accuracy for VTR navigation in both indoor and outdoor environments using deep-learned descriptors, while the extension to event-based vision broadens the applicability of VTR navigation to a wider range of challenging environments.
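The abstract does not give implementation details, but a minimal sketch may clarify the proportional-integral correction it describes: the descriptor-matching stage yields a horizontal offset between the live view and the matched teach-run keyframe, and a PI loop converts that offset into a steering correction. The class name, gain values, and the use of a pixel offset as the error signal are illustrative assumptions, not the paper's actual implementation.

```python
class PIHeadingController:
    """PI controller mapping a lateral image-space matching offset
    (pixels) to a steering correction (rad/s). Hypothetical sketch."""

    def __init__(self, kp: float = 0.004, ki: float = 0.0005,
                 integral_limit: float = 50.0):
        self.kp = kp                          # proportional gain (assumed)
        self.ki = ki                          # integral gain (assumed)
        self.integral_limit = integral_limit  # anti-windup clamp
        self._integral = 0.0

    def update(self, pixel_offset: float, dt: float) -> float:
        """pixel_offset: horizontal shift between the live frame and its
        matched teach-run keyframe, as estimated by descriptor matching.
        dt: control time step in seconds."""
        # Integrate the error, clamping to prevent integral windup.
        self._integral += pixel_offset * dt
        self._integral = max(-self.integral_limit,
                             min(self.integral_limit, self._integral))
        return self.kp * pixel_offset + self.ki * self._integral


# Example: a 12-pixel offset at a 20 Hz control rate yields a small
# angular-velocity correction steering the robot back onto the route.
ctrl = PIHeadingController()
omega = ctrl.update(pixel_offset=12.0, dt=0.05)
```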